Summation is the operation of adding a sequence of numbers; the result is their sum or total. If numbers are added sequentially from left to right, any intermediate result is a partial sum, prefix sum, or running total of the summation. The numbers to be summed may be integers, rational numbers, real numbers, or complex numbers. Besides numbers, other types of values can be added as well: vectors, matrices, polynomials and, in general, elements of any additive group (or even monoid). For finite sequences of such elements, summation always produces a well-defined sum (possibly by virtue of the convention for empty sums).
Summation of an infinite sequence of values is not always possible, and when a value can be given for an infinite summation, this involves more than just the addition operation, namely also the notion of a limit. Such infinite summations are known as series. Another notion involving limits of finite sums is integration. The term summation has a special meaning related to extrapolation in the context of divergent series.
The summation of the sequence [1, 2, 4, 2] is an expression whose value is the sum of each of the members of the sequence. In the example, 1 + 2 + 4 + 2 = 9. Since addition is associative, the value does not depend on how the additions are grouped; for instance (1 + 2) + (4 + 2) and 1 + ((2 + 4) + 2) both have the value 9; therefore, parentheses are usually omitted in repeated additions. Addition is also commutative, so permuting the terms of a finite sequence does not change its sum (for infinite summations this property may fail; see absolute convergence for conditions under which it still holds).
There is no special notation for the summation of such explicit sequences, as the corresponding repeated addition expression will do. There is only a slight difficulty if the sequence has fewer than two elements: the summation of a sequence of one term involves no plus sign (it is indistinguishable from the term itself) and the summation of the empty sequence cannot even be written down (but one can write its value "0" in its place). If, however, the terms of the sequence are given by a regular pattern, possibly of variable length, then a summation operator may be useful or even essential. For the summation of the sequence of consecutive integers from 1 to 100 one could use an addition expression involving an ellipsis to indicate the missing terms: 1 + 2 + 3 + ... + 99 + 100. In this case the reader easily guesses the pattern; however, for more complicated patterns, one needs to be precise about the rule used to find successive terms, which can be achieved by using the summation operator "Σ". Using this notation the above summation is written as:

\sum_{i=1}^{100} i
The value of this summation is 5050. It can be found without performing 99 additions, since it can be shown (for instance by mathematical induction) that

\sum_{i=1}^{n} i = \frac{n(n+1)}{2}
for all natural numbers n. More generally, formulas exist for many summations of terms following a regular pattern.
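For example, the closed form can be checked against a direct left-to-right summation; a minimal Python sketch (the function name direct_sum is just illustrative):

    # Compare the closed form n*(n+1)/2 with a direct summation of 1..n.
    def direct_sum(n):
        total = 0
        for i in range(1, n + 1):  # i runs from 1 to n inclusive
            total += i             # running total (partial sum)
        return total

    assert direct_sum(100) == 100 * 101 // 2 == 5050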
The term "indefinite summation" refers to the search for an inverse image of a given infinite sequence s of values for the forward difference operator, in other words for a sequence, called antidifference of s, whose finite differences are given by s. By contrast, summation as discussed in this article is called "definite summation".
Mathematical notation uses a symbol that compactly represents summation of many similar terms: the summation symbol ∑ (U+2211), an enlarged form of the upright capital Greek letter Sigma. This is defined thus:

\sum_{i=m}^{n} a_i = a_m + a_{m+1} + a_{m+2} + \cdots + a_{n-1} + a_n
The subscript gives the symbol for an index variable, i. Here, i represents the index of summation; m is the lower bound of summation, and n is the upper bound of summation. The notation i = m under the summation symbol means that the index i starts out equal to m. Successive values of i are found by adding 1 to the previous value of i, stopping when i = n. An example:

\sum_{i=3}^{6} i^2 = 3^2 + 4^2 + 5^2 + 6^2 = 86
Informal writing sometimes omits the definition of the index and bounds of summation when these are clear from context, as in

\sum a_i^2

which is informally equivalent to \sum_{i=1}^{n} a_i^2.
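In programming terms, the sigma notation corresponds to a loop over the index variable; a minimal Python sketch of the definition above (the helper name sigma is an illustrative choice, not standard notation):

    # Sum f(i) for i = m, m+1, ..., n, mirroring the sigma notation.
    def sigma(f, m, n):
        return sum(f(i) for i in range(m, n + 1))

    # Reproduces the example above: 3^2 + 4^2 + 5^2 + 6^2 = 86.
    assert sigma(lambda i: i**2, 3, 6) == 86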
One often sees generalizations of this notation in which an arbitrary logical condition is supplied, and the sum is intended to be taken over all values satisfying the condition. For example:

\sum_{0 \le k < 100} f(k)

is the sum of f(k) over all (integer) k in the specified range,

\sum_{x \in S} f(x)

is the sum of f(x) over all elements x in the set S, and

\sum_{d \mid n} \mu(d)

is the sum of μ(d) over all positive integers d dividing n.[1]
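Such condition-indexed sums translate directly into sums over a filtered range of values; a small Python sketch (the function f and the helper divisors are illustrative assumptions, not fixed notation):

    # Sum of f(x) over all elements x of a set S.
    S = {1, 4, 9}
    f = lambda x: x + 1
    assert sum(f(x) for x in S) == f(1) + f(4) + f(9)

    # Sum of f(d) over all positive divisors d of n.
    def divisors(n):
        return [d for d in range(1, n + 1) if n % d == 0]

    assert divisors(12) == [1, 2, 3, 4, 6, 12]
    total_over_divisors = sum(f(d) for d in divisors(12))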
There are also ways to generalize the use of many sigma signs. For example,

\sum_{\ell, \ell'}

is the same as

\sum_{\ell} \sum_{\ell'}
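Multiple sigma signs thus abbreviate nested single sums; a brief Python check on an arbitrary example array a:

    # A double sum over two indices equals nested single sums.
    a = [[1, 2], [3, 4]]
    double = sum(a[i][j] for i in range(2) for j in range(2))
    nested = sum(sum(a[i][j] for j in range(2)) for i in range(2))
    assert double == nested == 10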
A similar notation is applied when it comes to denoting the product of a sequence, which is similar to its summation, but which uses the multiplication operation instead of addition (and gives 1 for an empty sequence instead of 0). The same basic structure is used, with ∏, an enlarged form of the Greek capital letter Pi, replacing the ∑.
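The parallel carries over to code as well; for instance, Python's math.prod plays the same role for products that sum plays for summation, including the convention for an empty sequence:

    import math

    values = [2, 3, 4]
    assert sum(values) == 9         # summation of the sequence
    assert math.prod(values) == 24  # product of the sequence
    assert math.prod([]) == 1       # empty product is 1, the multiplicative identity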
It is possible to sum fewer than 2 numbers:

If the summation has one summand x, then the evaluated sum is x.

If the summation has no summands, then the evaluated sum is zero, because zero is the identity element for addition; this is known as the empty sum.

These degenerate cases are usually only used when the summation notation gives a degenerate result in a special case. For example, if m = n in the definition above, then there is only one term in the sum; if m > n, then there is none.
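These conventions agree with the behaviour of built-in summation in most programming languages; for example, in Python:

    assert sum([7]) == 7  # a summation with one summand evaluates to that summand
    assert sum([]) == 0   # a summation with no summands evaluates to zero (empty sum)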
If the iterated function notation f^n is defined, e.g. f^2(x) = f(f(x)), and is considered a more primitive notation, then summation can be defined in terms of iterated functions as:

\sum_{i=a}^{b} g(i) = \left( \{x, y\} \rightarrow \{x + 1,\; y + g(x)\} \right)^{b-a+1} \{a, 0\}

where the curly braces define a 2-tuple and the right arrow is a function definition taking a 2-tuple to a 2-tuple. The function is applied b − a + 1 times to the tuple {a, 0}; the sum of the g(i) is the second component of the resulting tuple.
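Read this way, the definition is a fold: a step function is applied repeatedly to a state pair. A minimal Python sketch under that reading (the function name iterated_sum is an illustrative assumption):

    # Summation of g(i) for i = a..b via b - a + 1 applications of the step
    # function that maps the pair (x, y) to (x + 1, y + g(x)).
    def iterated_sum(g, a, b):
        x, y = a, 0                   # start from the tuple {a, 0}
        for _ in range(b - a + 1):    # apply the step b - a + 1 times
            x, y = x + 1, y + g(x)
        return y                      # the sum is the second component

    assert iterated_sum(lambda i: i, 1, 100) == 5050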
In the notation of measure and integration theory, a sum can be expressed as a definite integral,

\sum_{k=a}^{b} f(k) = \int_{[a,b]} f \, d\mu

where [a, b] is the subset of the integers from a to b, and where μ is the counting measure.
Indefinite sums can be used to calculate definite sums with the formula[2]

\sum_{k=a}^{b} f(k) = \Delta^{-1} f(b+1) - \Delta^{-1} f(a)

where \Delta^{-1} f denotes an indefinite sum (antidifference) of f.
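As a concrete check, an antidifference of f(k) = k is F(k) = k(k − 1)/2, since F(k + 1) − F(k) = k; a small Python sketch (the names f, F, a, b are illustrative):

    # Definite sum computed from an antidifference F with F(k+1) - F(k) = f(k).
    f = lambda k: k
    F = lambda k: k * (k - 1) // 2   # an antidifference of f

    a, b = 3, 10
    assert sum(f(k) for k in range(a, b + 1)) == F(b + 1) - F(a)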
Many such approximations can be obtained by the following connection between sums and integrals, which holds for any:

increasing function f:

\int_{s-1}^{t} f(x)\, dx \le \sum_{i=s}^{t} f(i) \le \int_{s}^{t+1} f(x)\, dx

decreasing function f:

\int_{s}^{t+1} f(x)\, dx \le \sum_{i=s}^{t} f(i) \le \int_{s-1}^{t} f(x)\, dx
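For example, with the decreasing function f(x) = 1/x both integrals have a closed form in terms of the natural logarithm; a minimal Python check of the bounds above (the bound values s, t are arbitrary):

    import math

    # For decreasing f(x) = 1/x:
    # integral from s to t+1  <=  sum of f(i), i = s..t  <=  integral from s-1 to t
    s, t = 2, 100
    partial = sum(1 / i for i in range(s, t + 1))
    lower = math.log(t + 1) - math.log(s)     # integral of 1/x from s to t+1
    upper = math.log(t) - math.log(s - 1)     # integral of 1/x from s-1 to t
    assert lower <= partial <= upper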
For more general approximations, see the Euler–Maclaurin formula.
For summations in which the summand is given (or can be interpolated) by an integrable function of the index, the summation can be interpreted as a Riemann sum occurring in the definition of the corresponding definite integral. One can therefore expect that for instance

\frac{b-a}{n} \sum_{i=0}^{n-1} f\left(a + i\,\frac{b-a}{n}\right) \approx \int_{a}^{b} f(x)\, dx

since the right hand side is by definition the limit for n \to \infty of the left hand side. However, for a given summation n is fixed, and little can be said about the error in the above approximation without additional assumptions about f: it is clear that for wildly oscillating functions the Riemann sum can be arbitrarily far from the Riemann integral.
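A small numeric illustration for the smooth function f(x) = x² on [0, 1], whose exact integral is 1/3; a Python sketch (the tolerance is chosen ad hoc for this example):

    # Left Riemann sum approximating the integral of f over [a, b] with n terms.
    def riemann_sum(f, a, b, n):
        h = (b - a) / n
        return h * sum(f(a + i * h) for i in range(n))

    approx = riemann_sum(lambda x: x * x, 0.0, 1.0, 1000)
    assert abs(approx - 1.0 / 3.0) < 1e-3  # close here, but n is finite and f is tame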
The formulas below involve finite sums; for infinite summations, see the list of mathematical series.
The following formulas are manipulations of the sum \sum_{i=1}^{n} i = \frac{n(n+1)}{2}, generalized to begin a series at any natural number value (i.e., with lower bound n_1 \in \mathbb{N}):
In the summations below, x is a constant not equal to 1. The basic case is the finite geometric series:

\sum_{i=m}^{n} x^i = \frac{x^{n+1} - x^m}{x - 1}
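A quick Python check of the finite geometric series identity above, for one arbitrary choice of x, m, and n:

    # Finite geometric sum: x^m + x^(m+1) + ... + x^n == (x^(n+1) - x^m) / (x - 1).
    x, m, n = 3, 2, 7
    direct = sum(x**i for i in range(m, n + 1))
    closed = (x**(n + 1) - x**m) // (x - 1)
    assert direct == closed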
There exist very many summation identities involving binomial coefficients (a whole chapter of Concrete Mathematics is devoted to just the basic techniques). Some of the most basic ones are the following:

\sum_{k=0}^{n} \binom{n}{k} = 2^n

\sum_{k=0}^{n} \binom{n}{k} x^k = (1 + x)^n \qquad \text{(the binomial theorem)}
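A Python spot-check of the first of these identities, using math.comb for the binomial coefficients:

    import math

    # Row sums of Pascal's triangle: sum over k of C(n, k) equals 2^n.
    for n in range(10):
        assert sum(math.comb(n, k) for k in range(n + 1)) == 2**n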
The following are useful approximations (using theta notation):

\sum_{i=1}^{n} i^c \in \Theta(n^{c+1}) for real c greater than −1

\sum_{i=1}^{n} \frac{1}{i} \in \Theta(\log n)

\sum_{i=1}^{n} c^i \in \Theta(c^n) for real c greater than 1

\sum_{i=1}^{n} \log(i)^c \in \Theta(n \cdot \log(n)^c) for non-negative real c

\sum_{i=1}^{n} \log(i)^c \cdot i^d \in \Theta(n^{d+1} \cdot \log(n)^c) for non-negative real c and d

\sum_{i=1}^{n} \log(i)^c \cdot i^d \cdot b^i \in \Theta(n^d \cdot \log(n)^c \cdot b^n) for non-negative real c and d, and b greater than 1